Convergence Rates of Proximal Gradient Methods via the Convex Conjugate


Similar articles

Convergence Rates of Inexact Proximal-Gradient Methods for Convex Optimization

We consider the problem of optimizing the sum of a smooth convex function and a non-smooth convex function using proximal-gradient methods, where an error is present in the calculation of the gradient of the smooth term or in the proximity operator with respect to the non-smooth term. We show that both the basic proximal-gradient method and the accelerated proximal-gradient method achieve the s...
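As a rough illustration of the setting described above (a smooth term f plus a non-smooth term h, with a possible error in the gradient), the following sketch implements a basic, possibly inexact, proximal-gradient step; the helper names (proximal_gradient, grad_error) and the ℓ1/least-squares instance are illustrative assumptions, not code from the cited paper.

import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(grad_f, prox_h, x0, step, n_iter=200, grad_error=None):
    # Basic (possibly inexact) proximal-gradient iteration:
    #   x_{k+1} = prox_{step*h}(x_k - step * (grad f(x_k) + e_k)),
    # where e_k models the gradient error discussed in the abstract.
    x = x0.copy()
    for k in range(n_iter):
        g = grad_f(x)
        if grad_error is not None:
            g = g + grad_error(k)
        x = prox_h(x - step * g, step)
    return x

# Illustrative instance: f(x) = 0.5*||Ax - b||^2 (smooth), h(x) = lam*||x||_1 (non-smooth).
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
lam = 0.1
grad_f = lambda x: A.T @ (A @ x - b)
prox_h = lambda v, t: soft_threshold(v, lam * t)
step = 1.0 / np.linalg.norm(A.T @ A, 2)   # 1/L, with L the Lipschitz constant of grad f
x_hat = proximal_gradient(grad_f, prox_h, np.zeros(5), step)

With grad_error=None the sketch reduces to the exact proximal-gradient method; supplying a callable k -> e_k mimics the inexact-gradient setting considered in the abstract.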


Convergence Properties of Nonlinear Conjugate Gradient Methods

Recently, important contributions to convergence studies of conjugate gradient methods have been made by Gilbert and Nocedal [6]. They introduce a "sufficient descent condition" to establish global convergence results, whereas this condition is not needed in the convergence analyses of Newton and quasi-Newton methods. [6] hints that the sufficient descent condition, which was enforced by their ...
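Purely to make the quoted "sufficient descent condition" concrete, the sketch below shows a Polak-Ribière-type nonlinear CG iteration that checks d_k^T g_k <= -c * ||g_k||^2 and restarts with steepest descent when the check fails; the Armijo line search and the PR+ update are illustrative choices, not the scheme analysed in [6].

import numpy as np

def armijo_step(f, x, d, gxd, alpha=1.0, rho=0.5, c1=1e-4):
    # Simple Armijo backtracking line search along a descent direction d.
    fx = f(x)
    while f(x + alpha * d) > fx + c1 * alpha * gxd:
        alpha *= rho
    return alpha

def nonlinear_cg_pr(f, grad, x0, c=1e-4, n_iter=200, tol=1e-8):
    # Polak-Ribiere-type nonlinear CG that enforces the sufficient descent
    # condition d_k^T g_k <= -c * ||g_k||^2 by restarting with steepest
    # descent whenever the current direction violates it.
    x = x0.copy()
    g = grad(x)
    d = -g
    for _ in range(n_iter):
        if np.linalg.norm(g) < tol:
            break
        if d @ g > -c * (g @ g):   # sufficient descent condition violated
            d = -g                 # restart: steepest descent direction
        alpha = armijo_step(f, x, d, d @ g)
        x_new = x + alpha * d
        g_new = grad(x_new)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))   # PR+ formula
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x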


Generalized Conjugate Gradient Methods for $\ell_1$ Regularized Convex Quadratic Programming with Finite Convergence

The conjugate gradient (CG) method is an efficient iterative method for solving large-scale strongly convex quadratic programming (QP). In this paper we propose some generalized CG (GCG) methods for solving the ℓ1-regularized (possibly not strongly) convex QP that terminate at an optimal solution in a finite number of iterations. At each iteration, our methods first identify a face of an orthan...
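For context on the abstract's starting point, the classical CG iteration for a strongly convex QP min_x 0.5*x^T Q x - b^T x (with Q symmetric positive definite) can be sketched as follows; this is the textbook method, not the generalized GCG method proposed in the paper, and the function name cg_quadratic is a placeholder.

import numpy as np

def cg_quadratic(Q, b, x0=None, tol=1e-10, max_iter=None):
    # Classical conjugate gradient for min_x 0.5*x^T Q x - b^T x with Q
    # symmetric positive definite (equivalently, solving Q x = b); in exact
    # arithmetic it terminates in at most n = len(b) iterations.
    n = b.size
    x = np.zeros(n) if x0 is None else x0.copy()
    r = b - Q @ x          # residual = negative gradient of the objective
    d = r.copy()
    rs = r @ r
    for _ in range(max_iter or n):
        if np.sqrt(rs) < tol:
            break
        Qd = Q @ d
        alpha = rs / (d @ Qd)
        x = x + alpha * d
        r = r - alpha * Qd
        rs_new = r @ r
        d = r + (rs_new / rs) * d
        rs = rs_new
    return x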



Convex risk minimization via proximal splitting methods

In this paper we investigate the applicability of a recently introduced primal-dual splitting method in the context of solving portfolio optimization problems which assume the minimization of risk measures associated with different convex utility functions. We show that, due to the splitting characteristic of the primal-dual method used, the main effort in implementing it consists in the calcu...
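The point that the per-iteration work reduces to evaluating proximal operators can be illustrated with a generic primal-dual splitting step of the Chambolle-Pock type; this is an assumed schematic form, not necessarily the specific method used in the cited paper, and prox_g / prox_fstar are placeholder callables.

def primal_dual_splitting(K, prox_g, prox_fstar, tau, sigma, x0, y0, n_iter=500):
    # Chambolle-Pock-type primal-dual splitting for min_x f(K x) + g(x).
    # Each iteration only requires the proximal operators of g and of the
    # conjugate f*, plus multiplications by K and K^T (here, NumPy arrays).
    x, y = x0.copy(), y0.copy()
    x_bar = x.copy()
    for _ in range(n_iter):
        y = prox_fstar(y + sigma * (K @ x_bar), sigma)
        x_new = prox_g(x - tau * (K.T @ y), tau)
        x_bar = 2.0 * x_new - x    # extrapolation step
        x = x_new
    return x, y

def prox_of_conjugate(prox_f):
    # Moreau identity: prox_{s f*}(v) = v - s * prox_{f/s}(v / s),
    # so the prox of the conjugate comes for free from the prox of f.
    return lambda v, s: v - s * prox_f(v / s, 1.0 / s)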



Journal

Journal title: SIAM Journal on Optimization

Year: 2019

ISSN: 1052-6234, 1095-7189

DOI: 10.1137/18m1164329